DeepMind researchers propose a new 'streams' approach to AI development that focuses on experiential learning and autonomous interaction with the world, aiming to move beyond the limitations of current large language models and potentially surpass human intelligence.
Newsweek interview with Yann LeCun, Meta's chief AI scientist, detailing his skepticism of current LLMs and his focus on Joint Embedding Predictive Architecture (JEPA) as the future of AI, emphasizing world modeling and planning capabilities.
This article examines the dual nature of Generative AI in cybersecurity, detailing how it can be exploited by cybercriminals and simultaneously used to enhance defenses. It covers the history of AI, the emergence of GenAI, potential threats, and mitigation strategies.
AlexNet, a groundbreaking neural network developed in 2012 by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, has been released in source code form by the Computer History Museum in collaboration with Google. This model significantly advanced the field of AI by demonstrating a massive leap in image recognition capabilities.
ByteDance Research has released DAPO (Decoupled Clip and Dynamic Sampling Policy Optimization), an open-source reinforcement learning system for LLMs, aiming to improve reasoning abilities and address reproducibility issues. DAPO includes innovations like Clip-Higher, Dynamic Sampling, Token-level Policy Gradient Loss, and Overlong Reward Shaping, achieving a score of 50 on the AIME 2024 benchmark with the Qwen2.5-32B model.
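To make the first two of those techniques concrete, here is a minimal NumPy sketch of a token-level clipped policy-gradient loss with decoupled (Clip-Higher) bounds. The function name, shapes, and the `eps_low`/`eps_high` defaults are illustrative assumptions, not ByteDance's actual implementation.

```python
import numpy as np

def dapo_style_token_loss(ratios, advantages, eps_low=0.2, eps_high=0.28):
    """Sketch of a token-level clipped objective with decoupled bounds.

    ratios:     per-token importance ratios pi_theta / pi_old, shape (num_tokens,)
    advantages: per-token advantage estimates, shape (num_tokens,)
    eps_low / eps_high: asymmetric clip range; raising the upper bound
    (Clip-Higher) leaves more room for low-probability tokens to grow.
    """
    unclipped = ratios * advantages
    clipped = np.clip(ratios, 1.0 - eps_low, 1.0 + eps_high) * advantages
    # Token-level averaging: every token in the batch contributes equally,
    # rather than averaging per sequence first.
    return -np.mean(np.minimum(unclipped, clipped))

# Illustrative usage with made-up values.
ratios = np.array([0.9, 1.1, 1.35, 0.7])
advantages = np.array([0.5, 0.5, 0.5, -0.5])
print(dapo_style_token_loss(ratios, advantages))
```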
AAAI survey finds that most respondents are sceptical that the technology underpinning large language models is sufficient for artificial general intelligence.
>"More than three-quarters of respondents said that enlarging current AI systems ― an approach that has been hugely successful in enhancing their performance over the past few years ― is unlikely to lead to what is known as artificial general intelligence (AGI). An even higher proportion said that neural networks, the fundamental technology behind generative AI, alone probably cannot match or surpass human intelligence. And the very pursuit of these capabilities also provokes scepticism: less than one-quarter of respondents said that achieving AGI should be the core mission of the AI research community.
Scientists are exploring the capabilities of DeepSeek-R1, an AI model released by the Chinese firm DeepSeek. This open, cost-effective model performs comparably to industry leaders on mathematical and scientific problems, and researchers are leveraging its accessibility to build custom models for specific disciplines, although it still struggles with some tasks.
A detailed explanation of the Transformer model, a key architecture in modern deep learning for tasks like neural machine translation, focusing on components like self-attention, encoder and decoder stacks, positional encoding, and training.
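For readers who want the core computation in code, here is a minimal NumPy sketch of the scaled dot-product self-attention at the heart of that architecture. The learned query/key/value projections and multi-head splitting are omitted, and passing the same matrix as Q, K, and V is an illustrative simplification.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position attends to every position, weighted by
    query-key similarity (softmax of scaled dot products)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (seq, seq) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values

# Toy example: 3 tokens with 4-dimensional embeddings. In a real
# Transformer, Q, K, and V come from learned projections of the input.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(x, x, x).shape)   # (3, 4)
```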
The article discusses the resurgence of programming languages designed specifically for AI development, highlighting Mojo as a promising example. It explores the historical context of AI-focused languages, the limitations of Python for AI, and the features and benefits of Mojo and other emerging AI languages.